7 research outputs found

    Data Processing Bounds for Scalar Lossy Source Codes with Side Information at the Decoder

    Full text link
    In this paper, we introduce new lower bounds on the distortion of scalar fixed-rate codes for lossy compression with side information available at the receiver. These bounds are derived by presenting the relevant random variables as a Markov chain and applying generalized data processing inequalities à la Ziv and Zakai. We show that by replacing the logarithmic function with other functions in the data processing theorem we formulate, we obtain new lower bounds on the distortion of scalar coding with side information at the decoder. The usefulness of these results is demonstrated for uniform sources and the convex function Q(t) = t^{1-\alpha}, \alpha > 1. The bounds in this case are shown to be better than those obtainable from the Wyner-Ziv rate-distortion function. Comment: 35 pages, 9 figures
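As a rough illustration of the mechanism (an f-divergence-style formulation is assumed here for concreteness; the paper's exact functional may differ), a generalized information measure and its data processing property can be sketched as:

```latex
% Generalized information measure for a convex Q (illustrative sketch;
% with Q(t) = t \log t it reduces to ordinary mutual information):
I_Q(X;Y) \;=\; \sum_{x,y} p(x)\,p(y)\, Q\!\left( \frac{p(x,y)}{p(x)\,p(y)} \right)

% Data processing: for a Markov chain W \to X \to Y \to \hat{W},
I_Q(W;\hat{W}) \;\le\; I_Q(X;Y),

% which follows from convexity of Q via Jensen's inequality.  In
% particular Q(t) = t^{1-\alpha} is convex for \alpha > 1, so each such
% Q yields its own lower bound on the achievable distortion.
```

Each convex choice of Q gives a different inequality, which is why replacing the logarithm can tighten the bound for particular sources.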

    Universal Quantization for Separate Encodings and Joint Decoding of Correlated Sources

    Full text link
    We consider the multi-user lossy source-coding problem for continuous alphabet sources. In a previous work, Ziv proposed a single-user universal coding scheme which uses uniform quantization with dither, followed by a lossless source encoder (entropy coder). In this paper, we generalize Ziv's scheme to the multi-user setting. For this generalized universal scheme, upper bounds are derived on the redundancies, defined as the differences between the actual rates and the closest corresponding rates on the boundary of the rate region. It is shown that this scheme can achieve redundancies of no more than 0.754 bits per sample for each user. These bounds are obtained without knowledge of the multi-user rate region, which is an open problem in general. As a direct consequence of these results, inner and outer bounds on the rate-distortion achievable region are obtained. Comment: 24 pages, 1 figure
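The single-user building block, subtractive dithered uniform quantization, can be sketched as follows (a minimal illustration; the step size `delta` and the Gaussian source are arbitrary choices, and the follow-up entropy-coding stage is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.5                           # quantizer step size (assumption)
x = rng.normal(size=10_000)           # source samples (assumption)
u = rng.uniform(-delta / 2, delta / 2, size=x.shape)  # dither shared by
                                      # encoder and decoder

# Encoder: uniform quantization of the dithered source; the integer
# indices q/delta would then be fed to a lossless entropy coder.
q = delta * np.round((x + u) / delta)

# Decoder: subtract the same dither (subtractive dithering).
x_hat = q - u

# The reconstruction error is uniform on (-delta/2, delta/2) and
# statistically independent of the source, so the MSE is delta**2 / 12
# regardless of the source distribution.
err = x_hat - x
mse = np.mean(err ** 2)
```

The independence of the error from the source is what makes the scheme universal: its distortion is fixed by `delta` alone, and only the rate of the entropy coder depends on the source statistics.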

    Efficient On-line Schemes for Encoding Individual Sequences with Side Information at the Decoder

    No full text
    We present adaptive on-line schemes for lossy encoding of individual sequences under the conditions of the Wyner-Ziv (WZ) problem, i.e., the decoder has access to side information whose statistical dependence on the source is known. Both the source sequence and the side information consist of symbols taking values in a finite alphabet X. In the first part of this article, a set of fixed-rate scalar source codes with zero delay is presented. We propose a randomized on-line coding scheme that asymptotically achieves, with high probability, the performance of the best source code in the set, uniformly over all source sequences, while using the same rate and retaining zero delay. We then present an efficient algorithm for implementing our on-line coding scheme for a relatively small set of encoders, as well as an efficient algorithm for a larger, structured set of encoders, based on weighted graphs and the Weight Pushing Algorithm (WPA). In the second part of the article, we extend these results to variable-rate coding: a set of variable-rate scalar source codes is presented, and the randomized on-line coding scheme is generalized to this case. Performance is now measured by the Lagrangian cost (LC), defined as a weighted sum of the distortion and the length of the encoded sequence. We present an efficient algorithm for implementing our on-line variable-rate coding scheme for a relatively small set of encoders, and then consider the special case of lossless variable-rate coding, for which an on-line scheme that uses Huffman codes is presented. We show that this scheme can be implemented efficiently using the same graph-based methods from the first part. Combining these results, we build a generalized efficient algorithm for structured sets of variable-rate encoders. Finally, we show how to extend all the results to general distortion measures. The complexity of every algorithm is at most linear in the sequence length.
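The randomized on-line idea of the first part can be illustrated with a generic exponential-weights selector over a small set of scalar quantizer codebooks (the codebooks, learning rate, and source sequence below are hypothetical stand-ins, not the paper's construction, which also exploits the side information):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical set of fixed-rate (2-bit) scalar quantizer codebooks,
# playing the role of the finite set of candidate encoders.
codebooks = [
    np.array([-1.5, -0.5, 0.5, 1.5]),
    np.array([-3.0, -1.0, 1.0, 3.0]),
    np.array([-0.75, -0.25, 0.25, 0.75]),
]

def distortion(codebook, x):
    # Squared-error distortion of nearest-codeword quantization.
    return np.min((codebook - x) ** 2)

eta = 0.1                        # learning rate (assumption; the theory
                                 # tunes it to the sequence length)
weights = np.ones(len(codebooks))
total_loss = 0.0

x_seq = rng.normal(size=2000)    # an arbitrary individual sequence
for x in x_seq:
    p = weights / weights.sum()
    k = rng.choice(len(codebooks), p=p)      # randomized encoder choice
    total_loss += distortion(codebooks[k], x)
    # Multiplicative (exponential-weights) update on every expert.
    losses = np.array([distortion(c, x) for c in codebooks])
    weights *= np.exp(-eta * losses)

# Compare against the best fixed codebook in hindsight.
best = min(sum(distortion(c, x) for x in x_seq) for c in codebooks)
regret_per_symbol = (total_loss - best) / len(x_seq)
```

Because the weight of each codebook is `exp(-eta * cumulative_loss)`, the scheme's per-symbol distortion approaches that of the best codebook in the set, uniformly over sequences; the paper's graph-based algorithms make this kind of update efficient for large, structured encoder sets.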
